    Optimization in the Natural Sciences: 30th Euro Mini-Conference, EmC-ONS 2014, Aveiro, Portugal, February 5-9, 2014: revised selected papers

    This book constitutes the refereed proceedings of the 30th Euro Mini-Conference, EmC-ONS 2014, held in Aveiro, Portugal, in February 2014. The 13 revised full papers presented were carefully reviewed and selected from 70 submissions. The papers are organized in topical sections on dynamical systems; optimization and applications; and modeling and statistical techniques for data analysis.

    Focusing: coming to the point in metamaterials

    The point of the paper is to show some limitations of geometrical optics in the analysis of subwavelength focusing. We analyze the resolution of the image of a line source radiating in the Maxwell fisheye and the Veselago-Pendry slab lens. The former optical medium is deduced from the stereographic projection of a virtual sphere and displays a heterogeneous refractive index n(r) which is proportional to the inverse of 1+r^2. The latter is described by a homogeneous, but negative, refractive index. It has been suggested that the fisheye makes a perfect lens without negative refraction [Leonhardt, Philbin arxiv:0805.4778v2]. However, we point out that super-resolution in such a heterogeneous medium should be defined with respect to the wavelength in a homogenized medium, and it is perhaps more adequate to talk about a conjugate image rather than a perfect image (the former does not necessarily contain the evanescent components of the source). We numerically find that both the Maxwell fisheye and a thick silver slab lens lead to a resolution close to lambda/3 in transverse magnetic polarization (electric field pointing orthogonal to the plane). We note a shift of the image plane in the latter lens. We also observe that two sources lead to multiple secondary images in the former lens, as confirmed from light rays travelling along geodesics of the virtual sphere. We further observe resolutions ranging from lambda/2 to nearly lambda/4 for magnetic dipoles of varying orientations of dipole moments within the fisheye in transverse electric polarization (magnetic field pointing orthogonal to the plane). Finally, we analyze the Eaton lens, for which the source and its image are located either within a unit disc of air or within a corona 1<r<2 with refractive index n(r) = sqrt(2/r-1). In both cases, the image resolution is about lambda/2. Comment: Version 2: 22 pages, 11 figures. More figures added, additional cases discussed. Misprints corrected. Keywords: Maxwell fisheye, Eaton lens; Non-Euclidean geometry; Stereographic projection; Transformation optics; Metamaterials; Perfect lens. The last version appears at J. Modern Opt. 57 (2010), no. 7, 511-52
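
    As a rough numerical companion to the abstract above, the short Python sketch below simply evaluates the two refractive-index profiles it mentions: the Maxwell fisheye, with n(r) proportional to 1/(1+r^2), and the modified Eaton lens with air inside the unit disc and n(r) = sqrt(2/r-1) in the corona 1 < r < 2. The prefactor n0 = 2 for the fisheye and the sample radii are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch only: evaluates the two index profiles named in the
# abstract, in normalised units. The fisheye prefactor n0 = 2 is an assumption
# (the abstract only states n(r) proportional to 1/(1 + r^2)).
import numpy as np

def maxwell_fisheye_index(r, n0=2.0):
    """Maxwell fisheye: n(r) proportional to 1/(1 + r^2); n0 is an assumed prefactor."""
    return n0 / (1.0 + np.asarray(r, dtype=float) ** 2)

def eaton_index(r):
    """Modified Eaton lens as described in the abstract: air (n = 1) inside
    the unit disc, n(r) = sqrt(2/r - 1) in the corona 1 < r < 2."""
    r = np.asarray(r, dtype=float)
    n = np.ones_like(r)                      # unit disc of air
    corona = (r > 1.0) & (r < 2.0)
    n[corona] = np.sqrt(2.0 / r[corona] - 1.0)
    return n

if __name__ == "__main__":
    radii = np.linspace(0.2, 1.8, 9)         # illustrative sample radii
    print("  r    fisheye   eaton")
    for r, nf, ne in zip(radii, maxwell_fisheye_index(radii), eaton_index(radii)):
        print(f"{r:4.2f}   {nf:7.3f}   {ne:6.3f}")
```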

    Controlling surface plasmon polaritons in transformed coordinates

    Transformation optics allows for markedly enhanced control of electromagnetic wave trajectories within metamaterials, with interesting applications ranging from perfect lenses to invisibility cloaks, carpets, concentrators and rotators. Here, we present a review of curved anisotropic heterogeneous meta-surfaces designed using the tool of transformational plasmonics, in order to achieve a similar control for surface plasmon polaritons in cylindrical and conical carpets, as well as cylindrical cloaks, concentrators and rotators of non-convex cross-section. Finally, we provide an asymptotic form of the geometric potential for surface plasmon polaritons on such surfaces in the limit of small curvature. Comment: 14 pages, 9 figures

    A demonstration of 'broken' visual space

    It has long been assumed that there is a distorted mapping between real and ‘perceived’ space, based on demonstrations of systematic errors in judgements of slant, curvature, direction and separation. Here, we have applied a direct test to the notion of a coherent visual space. In an immersive virtual environment, participants judged the relative distance of two squares displayed in separate intervals. On some trials, the virtual scene expanded by a factor of four between intervals, although, in line with recent results, participants did not report any noticeable change in the scene. We found that there was no consistent depth ordering of objects that could explain the distance matches participants made in this environment (e.g. A > B > D yet also A < C < D) and hence no single one-to-one mapping between participants’ perceived space and any real 3D environment. Instead, factors that affect pairwise comparisons of distances dictate participants’ performance. These data contradict, more directly than previous experiments, the idea that the visual system builds and uses a coherent 3D internal representation of a scene.
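
    The intransitive pattern reported here (A > B > D yet A < C < D) amounts to a cycle in the graph of pairwise "farther than" judgements, which is exactly what rules out any single consistent depth ordering. The sketch below is only an illustration of that logic, not the authors' analysis: it checks a set of hypothetical pairwise judgements for such a cycle.

```python
# Illustrative sketch only (not the authors' analysis): given pairwise
# "a is farther than b" judgements, check whether any single depth ordering
# can satisfy all of them, by looking for a cycle in the comparison graph.
from collections import defaultdict

def has_consistent_ordering(farther_than):
    """farther_than: iterable of (a, b) pairs meaning 'a is farther than b'.
    Returns True if a single consistent ordering exists (graph is acyclic)."""
    graph = defaultdict(list)
    nodes = set()
    for a, b in farther_than:
        graph[a].append(b)
        nodes.update((a, b))

    WHITE, GREY, BLACK = 0, 1, 2
    state = {n: WHITE for n in nodes}

    def visit(n):
        state[n] = GREY
        for m in graph[n]:
            if state[m] == GREY:          # back edge: contradictory cycle
                return False
            if state[m] == WHITE and not visit(m):
                return False
        state[n] = BLACK
        return True

    return all(visit(n) for n in nodes if state[n] == WHITE)

# Judgements of the kind reported in the abstract: A > B > D yet A < C < D.
judgements = [("A", "B"), ("B", "D"), ("D", "C"), ("C", "A")]
print(has_consistent_ordering(judgements))   # False: no coherent depth map
```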

    Modelling human visual navigation using multi-view scene reconstruction

    It is often assumed that humans generate a 3D reconstruction of the environment, in either egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as these, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
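
    A minimal sketch of the "combine location predictions into a likelihood map" step described above, under strong simplifying assumptions that are not taken from the paper: each landmark is assumed to yield an independent, isotropic Gaussian estimate of the navigation goal, and the map is the summed log-likelihood over a grid of candidate endpoints. The function name, the landmark estimates and sigma are all hypothetical.

```python
# Minimal sketch of the likelihood-map idea, under assumed Gaussian errors.
# Not the authors' implementation: names, estimates and sigma are hypothetical.
import numpy as np

def navigation_likelihood_map(goal_estimates, grid_x, grid_y, sigma=0.5):
    """Log-likelihood over candidate (x, y) navigation endpoints, assuming each
    landmark contributes an isotropic Gaussian centred on its own estimate."""
    X, Y = np.meshgrid(grid_x, grid_y)
    log_like = np.zeros_like(X)
    for gx, gy in goal_estimates:
        log_like += -0.5 * ((X - gx) ** 2 + (Y - gy) ** 2) / sigma ** 2
    return log_like

# Hypothetical per-landmark goal estimates (e.g. from a photogrammetric
# reconstruction of the viewed scene), deliberately scattered around (1, 1).
estimates = [(0.8, 1.1), (1.2, 0.9), (1.0, 1.3)]
grid = np.linspace(-2.0, 3.0, 101)
ll = navigation_likelihood_map(estimates, grid, grid)
iy, ix = np.unravel_index(np.argmax(ll), ll.shape)
print(f"peak of the likelihood map: ({grid[ix]:.2f}, {grid[iy]:.2f})")
```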

    Gravitational Lensing from a Spacetime Perspective

    Empiricism and the Geometry of Visual Space
